Flint
The study of short texts in digital politics: Document aggregation for topic modeling
Nakka, Nitheesha, Yalcin, Omer F., Desmarais, Bruce A., Rajtmajer, Sarah, Monroe, Burt
Statistical topic modeling is widely used in political science to study text. Researchers examine documents of varying lengths, from tweets to speeches. There is ongoing debate on how document length affects the interpretability of topic models. We investigate the effects of aggregating short documents into larger ones based on natural units that partition the corpus. In our study, we analyze one million tweets by U.S. state legislators from April 2016 to September 2020. We find that for documents aggregated at the account level, topics are more associated with individual states than when using individual tweets. This finding is replicated with Wikipedia pages aggregated by birth cities, showing how document definitions can impact topic modeling results.
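A minimal sketch of the aggregation step described above, assuming tweets are stored as (account, text) pairs; the scikit-learn topic model and all variable names are illustrative choices, not specified by the paper.

    # Sketch: compare tweet-level vs. account-level documents for topic modeling.
    from collections import defaultdict
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    tweets = [
        ("legislator_A", "voting on the new education budget today"),
        ("legislator_A", "proud of our teachers and schools"),
        ("legislator_B", "infrastructure bill passes committee"),
        ("legislator_B", "fixing roads and bridges in our district"),
    ]

    # Tweet-level corpus: one short document per tweet.
    tweet_docs = [text for _, text in tweets]

    # Account-level corpus: concatenate every tweet from the same account.
    by_account = defaultdict(list)
    for account, text in tweets:
        by_account[account].append(text)
    account_docs = [" ".join(texts) for texts in by_account.values()]

    def fit_lda(docs, n_topics=2):
        """Fit a bag-of-words LDA model on a list of documents."""
        counts = CountVectorizer(stop_words="english").fit_transform(docs)
        return LatentDirichletAllocation(n_components=n_topics, random_state=0).fit(counts)

    lda_tweets = fit_lda(tweet_docs)      # short documents
    lda_accounts = fit_lda(account_docs)  # aggregated documents

The only difference between the two runs is the document definition, which is the manipulation the paper studies.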
Large-Scale Evaluation of Open-Set Image Classification Techniques
Bisgin, Halil, Palechor, Andres, Suter, Mike, Günther, Manuel
The goal of classification is to correctly assign labels to unseen samples. However, most methods misclassify samples with unseen labels and assign them to one of the known classes. Open-Set Classification (OSC) algorithms aim to maximize both closed- and open-set recognition capabilities. Recent studies showed the utility of such algorithms on small-scale data sets, but limited experimentation makes it difficult to assess their performance on real-world problems. Here, we provide a comprehensive comparison of various OSC algorithms, including training-based methods (SoftMax, Garbage, EOS) and post-processing methods (Maximum SoftMax Scores, Maximum Logit Scores, OpenMax, EVM, PROSER), where the latter are applied to features extracted by the former. We perform our evaluation on three large-scale protocols that mimic real-world challenges, where we train on known and negative open-set samples and test on known and unknown instances. Our results show that EOS helps to improve the performance of almost all post-processing algorithms. In particular, OpenMax and PROSER are able to exploit better-trained networks, demonstrating the utility of hybrid models. However, while most algorithms work well on negative test samples -- samples of open-set classes seen during training -- they tend to perform poorly when tested on samples of previously unseen unknown classes, especially in challenging conditions.
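As a rough illustration of the simplest post-processing rule in the list above, a maximum-softmax-score classifier rejects a sample as unknown when its top class probability falls below a threshold; the logits and the threshold in this sketch are illustrative values, not taken from the paper.

    # Sketch: maximum softmax score (MSS) thresholding for open-set rejection.
    import numpy as np

    def softmax(logits):
        """Numerically stable softmax over the last axis."""
        z = logits - logits.max(axis=-1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)

    def predict_open_set(logits, threshold=0.5, unknown_label=-1):
        """Return the argmax class, or `unknown_label` when the top
        softmax probability is below `threshold`."""
        probs = softmax(logits)
        top = probs.max(axis=-1)
        labels = probs.argmax(axis=-1)
        return np.where(top >= threshold, labels, unknown_label)

    logits = np.array([[4.0, 0.5, 0.2],    # confident -> assigned to known class 0
                       [1.0, 0.9, 1.1]])   # flat distribution -> rejected as unknown
    print(predict_open_set(logits, threshold=0.6))  # [ 0 -1]

The more elaborate post-processing methods named in the abstract (OpenMax, EVM, PROSER) replace this simple thresholding with richer models of the feature space, but operate on the same extracted features.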
The Law Is Accepting That Age 18--or 21--Is Not Really When Our Brains Become "Mature." We're Not Ready for What That Means.
In a car outside a convenience store in Flint, Michigan, in late 2016, Kemo Parks handed his cousin Dequavion Harris a gun. Things happened quickly after that: Witnesses saw Harris "with his arm up and extended" toward a red truck. The wounded driver sped off but crashed into a tree. EMTs rushed him to the hospital. He was dead on arrival.
Michigan man pleads guilty after murdering, eating testicles of other man met on dating app
A Michigan man pleaded guilty last week to murdering, dismembering and eating the body parts of another man he met on a dating app. Mark David Latunski, 53, of Shiawassee County, Michigan, admitted in court last Thursday that he killed 25-year-old hairdresser Kevin Bacon after luring the University of Michigan-Flint student to his home in December 2019, according to local outlet Mlive.com. Latunski pleaded guilty as charged to mutilation of a body and to open murder, which encompasses murder in the first and second degree. Latunski acknowledged stabbing Bacon in the back and taking parts of his dead body to the kitchen, where he ate them, after meeting the young man on Grindr, which is a hookup app for gay, bisexual and transgender men.
An Algorithm Is Helping a Community Detect Lead Pipes
More than six years after residents of Flint, Michigan, suffered widespread lead poisoning from their drinking water, hundreds of millions of dollars have been spent to improve water quality and bolster the city's economy. But residents still report a type of community PTSD, waiting in long grocery store lines to stock up on bottled water and filters. Media reports Wednesday said former governor Rick Snyder has been charged with neglect of duty for his role in the crisis. Snyder maintains his innocence, but he told Congress in 2016, "Local, state and federal officials--we all failed the families of Flint." One tool that emerged from the crisis is a form of artificial intelligence that could prevent similar problems in other cities where lead poisoning is a serious concern.
How AI Found Flint's Lead Pipes, and Then Humans Lost Them
More than a thousand days after the water problems in Flint, Michigan, became national news, thousands of homes in the city still have lead pipes, from which the toxic metal can leach into the water supply. To remedy the problem, the lead pipes need to be replaced with safer, copper ones. That sounds straightforward, but it is a challenge to figure out which homes have lead pipes in the first place. The City's records are incomplete and inaccurate. And digging up all the pipes would be costly and time-consuming.
AI is helping find lead pipes in Flint, Michigan
The algorithm is saving about $10 million as part of an effort to replace the city's water infrastructure. To catch you up: In 2014, Flint began getting water from the Flint River rather than the Detroit water system. Improper treatment of the new water supply, combined with old lead pipes, left residents with contaminated water. Solving the problem: Records that could be used to figure out which houses might be affected by corroded old pipes were missing or incomplete. So the city turned to AI.
Flint water crisis: How AI is finding thousands of hazardous pipes
Efforts are under way to replace the lead pipes that have been contaminating the water supply in the city of Flint, Michigan. Nobody knows which of the 55,000 properties are directly affected, but an artificially intelligent algorithm can make accurate guesses. The Flint water crisis began in 2014 when city officials began sourcing water from the local river instead of the Detroit water system. The water wasn't treated properly and corroded lead pipes, causing the heavy metal to leach into drinking water.
ActiveRemediation: The Search for Lead Pipes in Flint, Michigan
Abernethy, Jacob, Chojnacki, Alex, Farahi, Arya, Schwartz, Eric, Webb, Jared
We detail our ongoing work in Flint, Michigan, to detect pipes made of lead and other hazardous metals. After elevated levels of lead were detected in residents' drinking water, followed by an increase in blood lead levels in area children, the state and federal governments directed over $125 million to replace water service lines, the pipes connecting each home to the water system. In the absence of accurate records, and with the high cost of determining buried pipe materials, we put forth a number of predictive and procedural tools to aid in the search and removal of lead infrastructure. Alongside these statistical and machine learning approaches, we describe our interactions with government officials in recommending homes for both inspection and replacement, with a focus on the statistical model that adapts to incoming information. Finally, in light of discussions about increased spending on infrastructure development by the federal government, we explore how our approach generalizes beyond Flint to other municipalities nationwide.
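As a rough illustration of a model that adapts to incoming information, the sketch below refits a classifier after each batch of inspections and recommends the highest-risk homes for the next round; the features, classifier, batch sizes, and simulated data are hypothetical stand-ins rather than the authors' actual specification.

    # Sketch: an adapt-as-you-inspect loop in the spirit of the approach described above.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    n_homes = 1000
    X = rng.normal(size=(n_homes, 5))                        # stand-in home attributes
    true_lead = X[:, 0] + rng.normal(size=n_homes) > 0.5     # unknown ground truth

    inspected = list(rng.choice(n_homes, size=50, replace=False))  # initial inspections
    for _ in range(5):
        model = RandomForestClassifier(n_estimators=100, random_state=0)
        model.fit(X[inspected], true_lead[inspected])

        # Score uninspected homes and send crews to those most likely to have lead lines.
        remaining = np.setdiff1d(np.arange(n_homes), inspected)
        scores = model.predict_proba(X[remaining])[:, 1]
        next_batch = remaining[np.argsort(scores)[-25:]]     # top-25 predicted risk
        inspected.extend(next_batch)                         # results feed the next refit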
A Data Science Approach to Understanding Residential Water Contamination in Flint
Chojnacki, Alex, Dai, Chengyu, Farahi, Arya, Shi, Guangsha, Webb, Jared, Zhang, Daniel T., Abernethy, Jacob, Schwartz, Eric
When the residents of Flint learned that lead had contaminated their water system, the local government made water-testing kits available to them free of charge. The city government published the results of these tests, creating a valuable dataset that is key to understanding the causes and extent of the lead contamination event in Flint. This is the nation's largest dataset on lead in a municipal water system. In this paper, we predict the lead contamination for each household's water supply, and we study several related aspects of Flint's water troubles, many of which generalize well beyond this one city. For example, we show that elevated lead risks can be (weakly) predicted from observable home attributes. Then we explore the factors associated with elevated lead. These risk assessments were developed in part via a crowdsourced prediction challenge at the University of Michigan. To inform Flint residents, these assessments have been incorporated into a web and mobile application funded by Google.org. We also explore questions of self-selection in the residential testing program, examining which factors are linked to when and how frequently residents voluntarily sample their water.
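A minimal sketch of the kind of weak prediction from observable home attributes mentioned above, using a logistic regression of elevated lead (above the 15 ppb federal action level) on simulated stand-in features; the feature names, data, and model choice are illustrative only.

    # Sketch: predicting elevated lead readings from observable home attributes.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(1)
    n = 2000
    year_built = rng.integers(1900, 2000, size=n)    # older homes more likely to have lead lines
    home_value = rng.normal(60, 20, size=n)          # stand-in parcel value, in $1,000s
    X = np.column_stack([year_built, home_value])

    # Simulated lead readings (ppb) that decline with construction year.
    lead_ppb = np.maximum(0, 30 - 0.25 * (year_built - 1900) + rng.normal(0, 10, size=n))
    elevated = (lead_ppb > 15).astype(int)           # above the 15 ppb action level

    model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, elevated)
    print("in-sample AUC:", round(roc_auc_score(elevated, model.predict_proba(X)[:, 1]), 2))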